% -*- Mode: TeX; Default-character-style: (SAGE:SANS-SERIF-BODY :BOLD :NORMAL) -*-
\documentstyle[11pt]{article}
\def\startline{\par\nobreak\noindent}
{\obeylines\obeyspaces\gdef\procedure{\bigbreak\begingroup
\parindent=0pt \parskip=0pt plus0pt minus0pt
\obeylines \obeyspaces \eg\let^^M=\startline \eg}
\gdef\endproc{\par\endgroup\bigbreak}}
\def\wyn#1{$\spadesuit${\tt #1}}
\newcommand\code[1]{\mbox{\eg{#1}}}
\newcommand\eol{\hfill\break}
\newcommand\bline{\vskip12pt plus0pt minus0pt}
\newcommand\eg{\tt}
\title{The Role of Artificial Intelligence\\in\\Open Systems Science}
\author{\copyright\ 1989 Carl Hewitt}
\date{Draft of \today}
\begin{document}
% \bibliographystyle{plain}
% \bibliography{/home/tx/wsnow/biblio}
\maketitle
\section{Abstract}
Open Systems Science is the technology and science of large scale
information systems work. Examples of this kind of work
include flexible semiconductor manufacturing, constructing a permanent
space station, and the software engineering of an international
electronic funds transfer system. Open Systems Science
addresses issues of systems commitments, robustness, and scalability in
large scale information systems work. In contrast, Artificial Intelligence
is the science and technology of intelligent agents and robots.
Open Systems Science extends classical Artificial Intelligence by
introducing methods from the sociology of science, organizations theory,
and the theory of concurrent systems.
Artificial Intelligence has created new technologies of taxonomies,
inference-based systems, and problem spaces. All of these
technologies can be useful within systems that engage in large-scale
work. However, each of these technologies is useless without the
extensive systems support that is necessary to make it work.
\section{Introduction}
All large-scale Open Systems are concurrent, asynchronous, decentralized,
and indeterminate. They are composed of numerous participants which
operate {\em concurrently\/} in order to accomplish the multitude of
tasks that are performed. They are {\em distributed\/} in order to deal
with the influx of information from many sources and to convey
information to the places where it is needed.
In any short span of time, each participant of a large-scale Open System
operates {\em autonomously\/} and {\em asynchronously\/} in accordance
with its own local needs and procedures. No truly simultaneous change of
all the participants in a large-scale Open System is ever possible, and new
information may arrive from any source at any time. Thus in general, one
participant of the system will start using new information before it
reaches the others.
Furthermore, asynchronous operation means that any large-scale Open
System is {\em indeterminate\/} in a physical sense: it does not have a
determinable current state which (together with new information that
arrives) determines its future operation. In fact, attempts to pin down
an instantaneous state of a large system by gathering more information
about the finest details of its internal operation makes the system {\em
more\/} indeterminate, because gathering the information affects its
operation. Furthermore, yet another large-scale Open System would be
needed just to gather, interpret, and store the information about the
smallest-scale activities of the first system.
\subsection{An Illustration of Indeterminacy}
Consider a shared financial account which is accessible from multiple
sites using electronic funds transfer. For concreteness, consider an
implementation of such a shared account in our actor core
language. Each actor behavior script has message handlers (indicated
by \code{=>} below); one of these handlers will be applicable to the
incoming communication.
Changing behavior in actors is captured by the concept of {\em
replacement behavior}. In our actor core language a replacement behavior
can be specified by a change in the parameters of the same behavior
by using a ``{\em ready}'' command.
The code for a communication handler to process withdrawals in an account
is as follows:\footnote{A communication handler is also called a
``method'' or ``virtual procedure'' in object-oriented languages.}
\procedure %
(DefName Account
(Behavior [balance owner]
(--> [(Any Withdrawal amount) \&Serializer]
(If (>= balance amount)
{\sl Withdraw amount requested}
(Then
(Let \{[newBalance = (- balance amount)]\}
{\sl let \code{newBalance} be balance less withdrawal}
(Ready Serializer [balance = newBalance])
{\sl Account is ready for the next message}
(Return (Create WithdrawalReceipt amount owner newBalance))))
(Else
(Ready Serializer)
{\sl Account is ready for the next message}
(Complain (Create OverDraftNotice amount owner balance)))))))
\endproc
A new account with balance \code{100M} and owner \code{Clark} can be
created and bound to an identifier named \code{Acct1} as follows:
\procedure %
(DefName Acct1 (Create Account 100M Clark))
\endproc
\noindent
Suppose that Ueda and Shapiro need to share access to Clark's account. The
following commands give them the ability to communicate with \code{Acct1}:
\procedure %
(Send Ueda Acct1)
(Send Shapiro Acct1)
\endproc
\noindent
Now if Ueda attempts to withdraw 70M from \code{Acct1} using
\code{(Send Acct1 (Create Withdrawal 70M))}, while concurrently Shapiro
attempts to withdraw 80M, then the operation of the account will be
serialized so that one of them will get a withdrawal receipt and the
other an overdraft complaint.
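As a rough analogy (not the actor semantics itself), the serialized account can be sketched in Python, where a lock stands in for the actor's serializer and plain method calls stand in for message sends; the class and identifier names here are illustrative, not part of any actor language:

```python
import threading

class Account:
    """A serialized account: a lock plays the role of the actor's
    serializer, so withdrawal requests are handled one at a time."""
    def __init__(self, balance, owner):
        self.balance = balance
        self.owner = owner
        self._serializer = threading.Lock()

    def withdraw(self, amount):
        with self._serializer:          # only one handler runs at a time
            if self.balance >= amount:  # withdraw amount requested
                self.balance -= amount
                return ("WithdrawalReceipt", amount, self.owner, self.balance)
            return ("OverDraftNotice", amount, self.owner, self.balance)

acct1 = Account(100, "Clark")
results = []
# Ueda and Shapiro attempt concurrent withdrawals of 70 and 80.
t1 = threading.Thread(target=lambda: results.append(acct1.withdraw(70)))
t2 = threading.Thread(target=lambda: results.append(acct1.withdraw(80)))
t1.start(); t2.start(); t1.join(); t2.join()

# Exactly one request is honored; which one is physically indeterminate.
assert sorted(r[0] for r in results) == ["OverDraftNotice", "WithdrawalReceipt"]
```

Running the sketch repeatedly leaves the balance at either 30 or 20, depending on which request the serializer admits first, mirroring the indeterminacy discussed above.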
A denotational semantics for actor languages, based on system
configurations, has been developed in Actor Theory; it provides a meaning
for the scripts of actor programming languages, obtained recursively by
analyzing a script as a system of communicating actors
\cite{theriault-masters}. It is true that some
types of reasoning about systems implemented in actor languages can be
carried out using a declarative reading of the programs. Such
reasoning is a special case of actor {\em serializer
induction\/}\cite{serializers}.
However, it is physically indeterminate who will have their withdrawal
request honored. No amount of knowledge of the physical
circumstances in which the withdrawal requests are made determines the
outcome. Therefore the outcome cannot be deductively determined from
knowledge of the circumstances of the withdrawal request.
\subsection{Advantages of Open Systems}
Open Systems provide an environment in which participants can be more
self-reliant in the following respects:
\begin{itemize}
\item
{\bf Asynchrony:} enables each participant to operate as
quickly as possible, given local circumstances. Otherwise, the pace of
the activity of each participant would be locked to a single
scheduler down to the lowest level activities.
\item
{\bf Local autonomy:} enables each participant to react
immediately to changing circumstances. Otherwise, each participant would
have to consult a single decision maker for each decision.
\item
{\bf Late-arriving information:} enables participants to increase the
effectiveness of their decision making by taking new information into
account as it arrives. For example new geological fault lines might
be discovered under a nuclear reactor after it has been constructed.
\item
{\bf Multiple authorities:} increases pluralism,
diversity, and robustness. For example, the engineering department of a
utility wants to build a new kind of nuclear reactor which is inherently
safer than existing reactors, while the finance department maintains that
the financial risk is too great. Creativity is needed to bridge
these differences.
\item
{\bf Arm's length relationships:} enables actors to conceal their internal
activities from other actors. For example, the Nuclear Regulatory
Commission operates at arm's length from nuclear utilities that it
regulates in order to be more effective in detecting and prosecuting
violations of its regulations.
\item
{\bf Division of labor:} specialization of functionality can increase
system effectiveness by enabling each participant to concentrate
and focus its efforts on a narrow range of goals. For example, an
electric utility has separate finance and engineering suborganizations,
each specializing in different functions: finance takes care of raising
the money, and engineering directs the construction of power plants.
\end{itemize}
\section{Trials of Strength}
Any situation in which forces are pulling in different directions
constitutes a {\em trial of strength}. All trials of strength are local in
the sense that they occur at a particular time and place among local
participants. For example, negotiation provides a mechanism for
representatives of different participants to come together in a trial of
strength.
\subsection{Commitments}
A {\em commitment\/} is a course of action, usually involving several
actors, which the actors are {\em on course\/} to carry through. An
actor's {\em role\/} is its part in the commitment.
Commitments are used in this way in order to avoid becoming embroiled
in issues of {\em intentionality\/}. Instead, an actor is said to be {\em
on course\/} or {\em off course\/} with respect to an activity. This
terminology allows inanimate objects to participate in commitments.
Conflict is a trial of strength in which participants have
incompatible commitments. As we have seen, conflict is a fundamental
aspect of any Open System for a variety of reasons. Resources are
finite and limited; choices must be made about how to use them. In
addition, most organizations have built-in checks and balances which
more or less deliberately generate potentially conflicting
commitments. For example, the commitment of the safety department to
safe, conservative procedures often conflicts with the engineering
department's need to lower the costs of construction, placing the two
departments in conflict with one another.
Sometimes the commitments of the affected participants are compatible
and there is no conflict. One participant says, ``Let's do it this way,''
and everybody agrees, so the negotiation is a trivial one, but there is
always the {\em potential\/} for conflict. No one can be certain in
advance about the outcome, so any negotiation is to that extent a trial
of strength and therefore indeterminate.
Our new discipline of Open Systems Science needs to have an
intimate understanding of the nature of commitments. In particular,
Open Systems Science is especially concerned with the nature of
systems commitments and how they relate to the commitments of
participants inside and outside a system, because the broader
systems commitments are what make large-scale projects possible.
A {\em systems commitment\/} is a course of action that a system as a
whole is on course to carry out. For example, a utility can have an organizational
commitment to build a new power plant. The utility's Finance department
has a commitment to fund the cost of the plant, and its Engineering
department has a commitment to design and build it. Both Finance and
Engineering must work together in a fairly detailed way, and the
organizational commitment must address the specialized needs and
commitments of both participants. The organizational commitment of the
utility goes beyond the responsibilities of Financing, Engineering, and
other departments. The utility has organizational commitments and
authority that go beyond just the individual ones of its departments and
members.
We said before that a participant's role in a commitment is
its part in the commitment. Alternative courses of action can affect
participant use of resources which is often a source of conflict. There
are several kinds of resources: money, space/time/material,
mechanism/technology, and sentiment.
Money and space/time/material are self-explanatory. Mechanism/\break
technology is anything that transforms the world: for example, a nuclear
power plant transforms radioactive material into electricity. Sentiment
deals with how various actors feel about each other: goodwill,
reputation, obligation, etc. For example, when a public utility says
that its reactor is safe, it is staking part of its reputation on that
statement. Also, when a utility asks a regulatory board for leniency on
an issue, it is spending some of the board's goodwill toward the
utility, which means that the utility's freedom is inhibited later on.
However, the past is really gone and the future is never here. Both
planning for the future and reflecting upon past experience are
activities that take place in the present. In addition to the daily,
familiar use of resources, some activities which look to the future and
the past are also commitments. For example, planning can create a
commitment; it chooses a particular course of action that allocates
resources in one way instead of another and sets the planner on course to
keep the commitment. Creating a financial record also generates a
commitment. When a utility submits its annual report, it makes a
statement which says how much money it earned that year. Once it reports
this statement to the regulators, that's a commitment. If the regulators
come and investigate, the utility has to substantiate its claim that it
really did earn that much during the year.
Two commitments are said to be {\em conflicting\/} if they give rise to a
trial of strength which results in at least one of the commitments not
being kept. For example, the commitment of a utility to operate a nuclear
power plant is in conflict with the commitment of an environmental group
to shut it down. These two commitments are incompatible, and cannot both
be kept.
The notion of commitment used in this paper is closely related to the
terminology used by those sociologists who emphasize a commitment as a
choice among incompatible alternatives
\cite{concept-of-commitment,quality-of-life}. In this terminology, {\em
choice\/} is analyzed in terms of trials of strength. Commitments have
been discussed by Winograd and Flores in their discussion of Heidegger's
notion of ``thrownness'' \cite{winograd-flores}, and by Richard Fikes
\cite{commitment-based-framework} in the context of contract nets as
developed by Reid Smith and Randy Davis \cite{contract-nets}.
% also \cite{improvised-news}.
% CH NOTE: Get {\em Improvised News\/} by Tom Shibutani, study of
% rumor in organizations, citation goes in structure of commitments.
\subsection{Open Systems Semantics}
Open Systems Semantics is the study of the meaning of Open Systems
Action. Taking any action entails changes in commitments, and that
change is the meaning of the action. This is an open-world
characterization of meaning---as opposed to previous, closed-world
attempts based on possible worlds in which the meaning of a set of
sentences is defined to be the set of all possible worlds that
satisfy the meaning conditions. Building on previous actor theory, Gul Agha
\cite{agha-phd} has developed an
open-world, mathematical semantics for concurrent systems in which the
meaning of an action is characterized by its effect on the evolution of
the system.
In general, Open Systems Action involves conflict---and therefore
indeterminacy. Current situations and events influence, but do not
determine, the future because the outcomes of numerous trials of
strength cannot be known and some trials of strength cannot even be
anticipated.
In logical semantics, representation is the mapping between a sentence
or proposition and specified meaning conditions. Meaning is built on and
grows out of representation. Two participants agree about the meaning of a
sentence when they agree about the meaning conditions.
In Open Systems Semantics, however, representation is taken to be how a
communication affects commitments. The important difference is conveyed in the
following Open Systems Semantics slogan:
\begin{quote}
{\Large {\bf No representation without communication!}}
\end{quote}
The test of the degree to which one participant has adequately
communicated its meaning to another is whether the recipient's
commitments change in the way that constitutes the meaning.
Large organizations have extensive policies, procedures, and regulations
to control the meaning of organizational actions.
Open Systems Semantics is a research programme
\cite{methodology-of-research-programmes} for studying the meaning of
Open Systems Actions. Just as there is no global synchrony or cause
and effect, the meaning of an Open Systems Action is also localized: it
begins in the participants at a particular time and place and then
can causally propagate to wider contexts.
\subsection{Requirements of Open Systems}
Analysis of commitment relationships can be used to understand how the
characteristics of Open Systems engender requirements for Open
Systems implementations:
\begin{itemize}
\item
{\bf Asynchrony:} produces indeterminacy because arriving information
must be integrated with local information. The new information can
generate new commitments that conflict with pre-existing commitments.
\item
{\bf Local autonomy:} produces conflict because as systems
commitments change, some of the new commitments will be incompatible with
pre-existing local commitments.
\item
{\bf Late-arriving information:} can produce conflict when it arrives at
an advanced stage of processing (e.g., an engineering group reports new
concerns about the capabilities to survive an earthquake just as the
utility is about to apply for a license to operate the reactor).
\item
{\bf Multiple authorities and division of labor:} can produce conflict
because the specialized commitments of multiple authorities may be
incompatible and come into conflict.
\item
{\bf Arm's length relationships:} can produce conflict because the
internal commitments of other systems are not visible. This can
increase the severity of conflict because other systems may develop
entrenched incompatible commitments before the conflict is discovered.
\end{itemize}
Another underlying constraint is {\em continuous operation}. In many
cases a system cannot ``take a vacation'' in order to get itself
into better shape. It must continue operating though perhaps
at some reduced level of performance.
\subsection{Dealing with Unanticipated Conflict}
Robustness is keeping commitments in the face of unanticipated conflict.
Keeping a commitment often entails keeping subcommitments. For
example, the commitment to operate the Diablo Canyon reactor requires
keeping two subcommitments: constructing the plant and licensing it.
The licensing subcommitment might go smoothly at first and then run
into unanticipated conflict when seismologists discover new geological
faults near the location of the reactor. The robustness of the
commitment of the utility in part depends upon its ability to deal with
whatever unexpected trial of strength arises. In this case
the utility engages in a negotiation (i.e. a trial of strength)
with the Nuclear Regulatory Commission.
\section{Negotiation}
Negotiation is an important way in which a system achieves
robustness (i.e. keeping commitments in the face of conflict). A
negotiation is a discourse in which the participants make
representations held in common which can engender joint actions that
are new joint commitments. The new commitments will often in turn
lead to more conflict with pre-existing commitments of individual
participants as they make adjustments for the new commitments.
An organizational commitment of a utility to operate a nuclear power
plant leads to giving its Finance and Engineering departments more
specialized subcommitments. Engineering has the responsibility to
construct the reactor and Finance has the responsibility to raise the
money for construction. Negotiation gives both participants an
opportunity to negotiate how the utility can keep its commitment to
operate the plant. The commitment of Engineering is represented in its
construction plans and the commitment of Finance is represented in its
financial plan. Each is committed to the other's commitment.
Participants come together to work out a systems response to potential
conflict. Each participant has ongoing commitments to attend to, and
needs a way of figuring out how to allocate its resources. Instead of
being required to turn its full attention to the new commitment, each
participant can separate out a subpart which will be devoted to the
new commitment.
Negotiation can help a system to meet its existing commitments
while developing new ones---which is important for attaining
robustness. Negotiation also creates overall, systems
commitments---which are essential to scaling. Thus, systems need to
support mechanisms that can move a negotiation forward and determine
what progress is being made.
Insights gleaned from the social sciences (law, sociology,
anthropology, organizations theory, and philosophy of science) can help
us create systems that support systems commitments, robustness, and
scalability. Human organizations have evolved methods for dealing
with conflict---and for turning it into a strength instead of a
weakness. These methods can serve as a source of inspiration
for robustness in human/telecomputer systems.
\subsection{What Happens During a Negotiation}
Each participant brings its own commitments to a negotiation. These
include decision-making criteria, such as preferences among predicted
outcomes. For example: ``It is preferable to have nuclear power plants
because they lessen our dependence on unstable foreign fuel supplies''
and ``It is preferable not to have nuclear power plants because they
create a threat of the release of significant amounts of radioactivity.''
Conflicts among these preferences can be negotiated
\cite{negotiations}.
During a negotiation, the participants can make moves---where each move
changes the mover's commitments. In this way, the various participants
can arrive
at a joint commitment about the issues being addressed and the options
available for addressing those issues.
The various participants {\em clarify the issue\/} by discussing and
commenting upon each others' statements about the issue. This process
may expand or change the views of the various participants about the nature of
the issue. The representatives also discuss what the various {\em
options\/} are with respect to the commitments around this issue. An
option is a proposal for the rearrangement of systems
commitments. Here, discussion and commentary about options can lead to
the generation of new options, further clarification of the nature of the
issue, and further commentary on the commitments that are affected by the
issue.
\subsection{Contradictions}
{\em Contradictions\/} arise because each negotiating participant
attempts to keep its own commitments. When conflict leads to a
negotiation, each participant deliberately resists some of the
statements that other participants make to support their commitments.
Each participant uses language as a tool to further its own
commitments, and that often produces resistance and even contradictory
statements.
The expression of conflict can be a very positive aspect of negotiation.
The diversity that produced the conflict and contradictions can also
produce new ideas, suggestions, and options.
\subsection{Against Bureaucracy}
Open Systems Science needs to provide methods to keep systems from
acting bureaucratically in the following ways:
\begin{itemize}
\item The rigid application of rules.
\item The arbitrary application of authority.
\end{itemize}
If systems are continually potentially involved in negotiations, how
do they ever get anything accomplished? Won't bringing negotiation to
human/telecomputer systems create even worse messes than in human
systems? While it's true that negotiations can break down and
result in deadlocks, they can also come up with creative solutions.
Two factors that help ``grease the wheels'' of negotiation are
cooperation and allies.
\subsection{Cooperation}
{\em Cooperation\/} is the process by which participants become committed to
each others' commitments. For example, Finance and Engineering might
both be committed to building a new power plant. Finance is committed to
raising the money to build the plant, and Engineering is committed to a
schedule for constructing the plant. Finance relies on Engineering's
commitment to build the plant on schedule in order to have credibility in
the financial marketplace---so Finance is committed to Engineering's
commitment. In a similar fashion, Engineering relies on Finance's
ability to provide the money to pay the construction cost as it comes due
so that construction can continue. So Engineering is committed to
Finance's commitment. Because we have these cross-commitments, we say
that Finance and Engineering are cooperating.
Another example deals with a utility and vendor. The utility generates a
purchase order for product $Q$. The vendor commits itself to shipping
product $Q$ at a certain time. The utility's commitment to the vendor's
shipping product $Q$ is represented by the purchase order. The vendor's
commitment to ship the product is dependent on the utility's commitment
to pay---as represented by the purchase order. Thus the purchase order
formalizes the cooperation between customer and vendor, and represents
the commitment of both participants to the others' commitment.
This process of mutual commitment---cooperation---is of fundamental
importance.
\subsection{Allies}
Another aspect of negotiation that helps make the negotiation process go
smoothly is the notion of an {\em ally}. One of the participants may claim an
ally: i.e., predict that in some future trial of strength, its ally would
behave in a certain way. For example, in a dispute over meeting a
payroll, someone might claim that if management doesn't allocate
sufficient funds to meet the payroll, the union will call for a strike!
The ally may prove to be faithful or unfaithful: faithful in the sense
that in the future trial of strength, it actually does behave as claimed.
\begin{quote}
{\em
\vbox{Glendower: I can call spirits from the vasty deep.\hfill\break
Hotspur: Why, so can I, or so can any man; but will they come when you
do call for them?\hfill\break
\hfill---Shakespeare: {\em Henry IV, Part 1}, Act III, Scene 1.}}
\end{quote}
Thus, when a negotiating participant claims an ally, the likely outcomes of
future trials of strength must be considered in deciding on what action
to take. In this way, claiming the support of allies can sway a
negotiation in a certain direction, but it is not necessarily decisive.
For example, a customer might invoke the Public Utility Board as an ally
when presenting a complaint to a utility that refuses to remove excess
charges from its bill. The utility needs to consider the implications of
a Utility Board investigation in deciding how to respond to the
customer.
Claiming an ally can be a very one-sided relationship. If the utility
decides to honor the customer's claim about the probable outcome of a
Utility Board investigation, then the Utility Board would never hear
about the incident, even though it was successfully invoked as an ally.
By contrast, an alliance is a {\em mutual commitment}. The participants to an
alliance make mutual commitments of their time, money, staff, and other
resources. Alliances are important outcomes of negotiation. Almost
every negotiation will affect alliances, either by creating new ones or
by strengthening, adjusting, or weakening old ones.
\section{Outcomes of Negotiation}
Many kinds of outcomes are possible, but the following three often
occur:
\begin{itemize}
\item
A {\bf resolution} to which the participants commit themselves.
\item
A {\bf deadlock} in which the participants at this particular negotiation
cannot reach an agreement. Quite often as a result of deadlock, another
negotiation is held with different representatives, and on a different
issue: namely, the fact that the other negotiation deadlocked. ``Those
guys didn't work it out, what are we going to do about it?''
\item
An {\bf appeal}. Some of the representatives might be unhappy about
the outcome and appeal to other participants---which might set up
another negotiation to deal with the issue of what to do about the
outcome of the previous negotiation.
\end{itemize}
Participants can be stalemated in conflict for a long period of time. For
example, an environmental group can work for decades attempting to revoke
the operating license of a nuclear power plant. Maintaining a
negotiation does consume resources, however. One has to keep track of
the other participant's position, plan a strategy for continuing the
negotiation, respond to the other participant's moves all the time, and so
on. Actions like these consume time, communications, storage space, and
other resources that would otherwise be put to different use. Thus,
maintaining a negotiation is a commitment.
In some cases, a negotiation ends when one participant runs out of the resources
needed to continue. In other cases, the process explicitly provides a back-up
procedure in the sense that it leaves the conflict potentially resumable, but
ends the current negotiation. An example of this would be a state that is
determined to oppose a public utility in its attempt to operate a nuclear
power plant. The state can oppose the utility at every stage of its attempt to
get an operating license. Suppose that at each stage the negotiation is
broken off when the Nuclear Regulatory Commission decides in favor of the
utility. After each hearing the resources committed to the negotiation are
allowed to go their own way, with the intention that a whole new negotiation
might be convened at another time. Finally just before the nuclear power
plant goes into operation, the state might ``win'' the negotiation by offering
to compensate the utility for what it would earn by operating the nuclear plant
{\em provided} that it agrees to sell the plant to the state for \$1!
\subsection{Negotiations Create Commitments}
Negotiations are important, even if no conflict emerges, because they
create systems commitments that go beyond the individual
commitments of the participants involved. Negotiations always have multiple
possible outcomes. Choices are made during a negotiation, which may
result in the creation of a systems commitment: the participants agree
on a particular course of joint action. The negotiation might prove to
be a trivial one in which agreement is easily reached, but the outcome
still represents a systems commitment. Late-arriving information
could have caused one of the participants to strive for a different outcome.
The significance of negotiations lies in their outcomes and the way
those outcomes affect other systems actions. For example a nation
will incrementally develop an electric power industry---and that
industry will influence energy costs, pollution levels, generating
capacity, etc. In the case of a utility constructing a nuclear power
plant with two reactors, Engineering and Finance can have a dispute as
to whether to construct both reactors concurrently, as opposed to
finishing one before starting the next. Engineering prefers building
both at once because it can overlap similar activities to bring down
the cost. Finance prefers building them sequentially because the
financial burden and risk is less. The dispute between Finance and
Engineering will have an outcome in terms of the utility's
profitability.
\subsection{Negotiation is Creative}
Negotiation is intrinsically creative. Often, the outcome is not as
predicted, or is unintended by participants, or may even be unwanted by
some participants. On the other hand, an outcome may turn out to be
better than expected. Even when a negotiation does not break new ground
and the outcome is one of those initially sought by one or more participants,
the process used to reach that outcome is fundamentally creative in the
sense that it creates a systems commitment.
As we have seen, trials of strength embody conflict (because of
incompatible outcomes), and therefore indeterminacy (because no participant can
be certain what the outcome will be). Trials of strength are the
fundamental unit of activity that we want to understand and explore. The
actual unfolding of a trial of strength is a unique performance, so
strictly speaking, a trial of strength can never be repeated. A similar
one could be staged at a different place and time, but each performance
is unique.
This cycle of commitments leading to negotiations which lead to
commitments, some of which conflict with other commitments and thus lead
to further negotiations---this cycle is the way the world works.
\subsection{The Rationale}
The {\em rationale\/} for the outcome of a negotiation is stated at the
end of the negotiation. The rationale(s) given for the outcome are
partly generated during the negotiation process as the participants discuss
the proposed options. As each participant challenges each other's positions,
new beliefs and preferences are created. As the negotiation continues, a
rationale is often created in support of a particular outcome. For
example, in a conflict between Finance and Engineering about which of two
types of plant to build, the rationales supporting the outcome may
describe:
\begin{itemize}
\item
{\bf Predicted beneficial results:} A utility justifies the development
of a new plant: ``Nuclear power will cost less than burning fossil
fuel.''
\item
{\bf Policies guiding conduct:} The management of a utility makes a
policy: ``We must follow the rules and regulations of the Nuclear
Regulatory Commission in building our plant.''
\item
{\bf Reasons tied to specific institutional roles or processes:} A
utility sells a completed, ready-to-go nuclear power plant to the state
government (which plans to demolish it) with the justification that the
state has agreed to compensate the utility in other ways.
\item
{\bf Precedent:} It is traditional to run diagnostics for the nuclear
reactor on Monday morning.
\end{itemize}
Precedent may seem like a weak rationale. However, deciding according
to precedent in the absence of strong alternatives has the
consequences of predictability and stability. In the absence of
strong alternatives, using precedent is usually less costly than
constantly redoing a decision process.
The rationale becomes part of the systems history, and may become
a precedent.
This taxonomy not only describes characteristics of outcome rationales,
it also provides criteria for identifying problems and pointing out ways
in which the process can be made more effective. So the rationale is
much more than the big cheese standing up and saying, ``This course of
action will lead to wonderful results.'' Any rationale can claim
beneficial results. However, the rationale will be judged on its own
merits. Good decisions can have bad rationales, and vice versa.
\subsection{Assessment of Systems Commitments}
No one can stand outside the system and assess a system's
commitments. Anyone who wants to understand the commitments in place
must become a player and participate in system processes. All such
assessments are made within a framework of conflict: allies,
commitments, inconsistency, limitations of resources, etc. The {\em
only way\/} to assess systems commitments is to become part of the
system processes.
For example, the Nuclear Regulatory Commission abolished the requirement
that localities and states must approve emergency evacuation plans before
a nuclear reactor can be granted an operating license, in large part
because local officials were refusing to approve the plans. The
commission felt that communities were using the evacuation plan process
to prevent nuclear plants from receiving operating licenses. After the
NRC announced that it would abolish the requirement, many participants
challenged its authority in court. They criticized the commission for
changing the rules in the middle of the licensing process.
Meta-commitments in this instance set new policies and procedures (i.e.,
new commitments) about how commitments get changed. These new
commitments address concerns about how the previous licensing procedure
was carried out (i.e., that the communities had veto power over the
emergency evacuation plans). As a result of this trial of strength, the
commission created new procedures and policies for changing its
regulations so that in the future, various participants will participate in a
better-defined process. The new procedures and policies arose from a
meta-commitment negotiation, and formed a commitment about how to change
other commitments.
\subsection{Authority and Responsibility}
The meta-commitment described above changed the Nuclear Regulatory
Commission's authority and responsibilities. The utilities gained power
to participate in decisions about emergency evacuation plans, and the
commission took responsibility for spelling out its processes for
changing the licensing process.
Authority is power, and power is the ability to take action (i.e., use
resources). More precisely, authority is power legitimized by the
commitments of other authorities. For example, a utility has to register
with the Secretary of State in whichever state it operates, so its
organizational power is legitimized by the power of another authority,
namely the Secretary of State. If that authority withdraws this
legitimization, the utility's authority becomes problematical.
An organization's power is its control over resources, and its
responsibilities are its part in its commitments. (Accountability is
whether or not it actually takes those actions and meets those
commitments.) So authority and responsibility are both intimately
tied to an organization's commitments.
Authority can be delegated, but responsibility cannot. Responsibility
is established by the organization's undertaking a certain set of
commitments. An actor might get some help from other
participants in meeting those commitments, but they are still the
actor's own responsibility. So in the narrow sense, the actor cannot
delegate responsibility. What it can do instead is create other
organizational arrangements that also carry the same commitment---and
hope that will be sufficient. But if it's not, that commitment is
still that actor's responsibility.
\section{Relationship to Artificial Intelligence}
The question now arises as to the relationship between Open Systems
and Artificial Intelligence. Various technologies have been developed in
Artificial Intelligence for providing a foundation for and structuring of
computation, including:
\begin{itemize}
\item microtheories
\item problem spaces
\item taxonomies
\end{itemize}
\subsection{Microtheories}
One of the most powerful ideas of science is the idea of a {\em
microtheory}. Microtheories are based on a closed-world
assumption---from a set of rules specified in advance, results can be
algorithmically checked for correctness. A spread-sheet is a good
example. The rules are the calculation procedures used in the body of
the spread-sheet. Given the previous values and formulas, an
automaton can algorithmically check whether the new values are correct
in real time. Our notion of a microtheory is very general in that it
encompasses all known forms of deduction including first order logic,
nth order logics, modal logics, intuitionistic logic, relevance logic,
lambda calculi, circumscription, default logics, etc [de Kleer, Kripke,
McDermott, McCarthy, Nilsson, Reiter, Sandewall, etc.].
A microtheory has important strengths:
\begin{itemize}
\item
It is portable. A microtheory can be expressed as a stable inscription
that does not spontaneously change and can easily be moved and copied.
\item
The correctness of a derivation is algorithmically decidable solely from
the text of the derivation. Something as simple as an automaton can
decide in real time whether or not a derivation is correct.
\end{itemize}
\noindent
Within a microtheory, there are well-defined methods for dealing with any
conflict that might arise. Thus, negotiations are not very important
within a microtheory because the correctness of a derivation is
algorithmically decidable in a closed world.
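The spread-sheet example above can be made concrete with a minimal Python sketch (the cell names and figures here are illustrative, not from the text): because the formulas are fixed in advance, an automaton can decide mechanically whether a proposed set of values is correct.

```python
# Minimal sketch of a spread-sheet as a closed-world microtheory.
# The formulas (rules) are specified in advance, so any claimed set of
# cell values can be checked for correctness algorithmically.

def check_sheet(values, formulas):
    """Return True iff every derived cell matches its formula.

    values   -- dict mapping cell name to its claimed value
    formulas -- dict mapping derived cell name to a function of the values
    """
    return all(values[cell] == rule(values) for cell, rule in formulas.items())

# A hypothetical fragment of Finance's microtheory of the plant's cost.
formulas = {
    "total":   lambda v: v["construction"] + v["interest"],
    "debt_ok": lambda v: v["debt_payment"] <= 0.25 * v["income"],
}

claimed = {
    "construction": 900, "interest": 300, "total": 1200,
    "debt_payment": 260, "income": 1000, "debt_ok": False,
}
print(check_sheet(claimed, formulas))  # True: every derived cell matches
```

Within this closed world no negotiation is needed: either the claimed values follow from the formulas or they do not.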
Microtheories play an important role in negotiations because they can be
brought to bear on issues and provide support for commitments with
respect to those issues. For example, the utility's Finance and
Engineering departments might each have a different spread-sheet
model of the utility's financial condition with respect to the costs of a
new plant, and each representative can then bring that microtheory to
the negotiation. Their respective recommendations of how the utility
should spend its money might be contradictory. Comparing their
microtheories can help to determine what some of the underlying
conflicts are. They might discover that Finance's Comptroller does not
believe that Engineering can meet its construction schedule. Derivations
of a microtheory can be brought to bear as supporting arguments in the
negotiation. In general, however, there will be a lot of these
microtheories, and they will often have derivations that
formally contradict the derivations of other microtheories.
Each microtheory compiles certain methods in a rigorous way. The
Comptroller tries to protect the utility against financial
difficulties---and has successfully negotiated a commitment from
Engineering that the utility will not borrow money if payments will
exceed 25 percent of its income. On the other hand, the Engineering
department maintains that more generating capacity is needed and that
with their construction schedule, the debt payment will never exceed 25
percent.
Many of the microtheories embody various commitments and allies. For
example, a spread-sheet microtheory derived from the tax code can be
used to deduce the tax consequences of differing proposals, and the
participant holding this microtheory can claim the IRS as an ally
(i.e., claiming that the IRS will support the conclusions drawn).
Thus, each side of any conflict (such as whether to pursue the
construction schedule developed by Engineering) will be able to marshall
its own body of microtheories, principles, precedents, and conclusions.
Having a deductive proof based on a microtheory usually does not thereby
carry the day and win the negotiation. The other participant will usually have
a competing microtheory. So microtheories are a strength, but they're
just one tool of negotiation. A powerful tool, but just one tool.
Having a microtheory facilitates negotiation, but in general does not
determine the outcome.
\subsection{The Role of Logical Deduction}
Several different negotiating strategies can be used with microtheories.
One is to include all the {\em if}s, {\em and}s, {\em but}s and {\em
wherefore}s that one can imagine. This creates a cumbersome
microtheory that attempts to cover all possible special circumstances.
For example, ``If the utility uses more than 25 percent of its income for
debt payment, {\em and if furthermore\/} it does not have lots of liquid
assets, {\em and if furthermore\/} \ldots\ then the utility should not
take on more debt.''
A different strategy is to state a very simple rule and let it unfold in
the ongoing negotiation whether any exceptions apply. So the Comptroller
says: ``If construction is delayed, then the utility will spend much more
than 25 percent of its income for debt, so we shouldn't adopt the
construction plan.'' And the other participant replies, ``Yes,
but---Engineering has a good record for completing construction projects
on schedule at close to its estimated cost. Even though Engineering is
building a new kind of plant which can burn either coal or gas, it is not
very different from what it has built before.'' Having simple
microtheories that are parsimonious, easily understood, and clear in
their causality is often a better negotiating strategy than one which
tries to stipulate in advance all of the conditions which govern the
applicability of every rule.
The participants to a negotiation do not know for certain what sorts of
rationale for action and microtheories the others will present. The
applicability of one participant's microtheories depends on what happens during
the negotiation, not on the ability to assemble a large collection of
microtheories ahead of time. Either participant might come up with a
microtheory that the other has not thought of.
The Open Systems model of representations can be used to analyze a
deductive approach that has been explored in [McCarthy, etc.].
By inserting caveats into the axioms of conflicting microtheories,
interactions among the conditions of applicability of the axioms of the
microtheories can be expressed.
For example, various factors bear on the safety of the Diablo Canyon
nuclear reactor. Suppose that we attempt to conduct the negotiation by
writing rules with explicit caveats. Consider the following two rules:
\procedure %
{\rm Rule 1:}
if trained-operators and not(caveat-1), then safe-reactor
\endproc
\noindent
Having trained operators makes for a safe reactor unless it can be shown
that caveat-1 is true. Continuing to axiomatize,
\procedure %
{\rm Rule 2:}
if earthquake-zone and not(caveat-2), then not(safe-reactor)
\endproc
\noindent
Being in an earthquake zone means the reactor is not safe unless it can
be shown that caveat-2 is true.
In this way axioms for caveats can be developed over time
and gradually improved. For example, consider the following rule:
\procedure %
{\rm Interaction Rule 1:}
if trained-operators, then caveat-2
\endproc
\noindent
Having trained operators implies that being in an earthquake zone does
not necessarily imply that the reactor is unsafe.
\procedure %
{\rm Interaction Rule 2:}
if earthquake-zone, then caveat-1
\endproc
\noindent
Also, being in an earthquake zone implies that having trained operators
does not necessarily imply that the reactor is safe.
Unfortunately, the addition of Interaction Rules 1 and 2 blocks the
applicability of Rules 1 and 2. If we have both trained operators and
an earthquake zone, Rule 1 cannot be used since earthquake-zone implies
caveat-1, and Rule 2 cannot be used since trained-operators implies
caveat-2. The following axioms are needed instead:
\procedure %
{\rm Interaction Rule $1'$:}
if trained-operators and not(caveat-Interaction-Rule-1),
then caveat-2
\bline
{\rm{Interaction Rule $2'$:}}
if earthquake-zone and not(caveat-Interaction-Rule-2),
then caveat-1
\endproc
These interaction rules are needed to prevent contradiction and they can
be highly non-modular. For example, the use of interaction rules raises
the following question: Does being in an earthquake zone imply that the
presence of trained operators implies that being in an earthquake zone
implies that the reactor is safe? Questions like this can also be
expressed as rules:
\procedure %
{\rm{Second-Order Interaction Rule 1:}}
if trained-operators, then caveat-Interaction-Rule-2
\bline
{\rm{Second-Order Interaction Rule 2:}}
if earthquake-zone, then caveat-Interaction-Rule-1
\endproc
\noindent
Again, the Second-Order Interaction Rules 1 and 2 cannot be allowed to
stand. Instead, the following rules must be used:
\procedure %
{\rm{Second-Order Interaction Rule $1'$:}}
if trained-operator
and not(caveat-Second-Order-Interaction-Rule-1),
then caveat-Interaction-Rule-2
\bline
{\rm{Second-Order Interaction Rule $2'$:}}
if earthquake-zone
and not(caveat-Second-Order-Interaction-Rule-2),
then caveat-Interaction-Rule-1
\endproc
\noindent
At this point, it becomes very difficult to understand what we are talking
about.
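The blocking effect of the interaction rules can be seen in a small Python sketch (my own encoding of the rules above; the propositional atoms follow the text): once Interaction Rules 1 and 2 are added, a situation with both trained operators and an earthquake zone derives both caveats, so neither Rule 1 nor Rule 2 applies.

```python
# Sketch of the caveat rules: first derive all caveats from the facts,
# then see which object-level rules still apply.

def conclusions(facts):
    """Apply Rules 1 and 2 plus Interaction Rules 1 and 2 to ground facts."""
    derived = set(facts)
    # Interaction Rule 1: trained-operators => caveat-2
    if "trained-operators" in derived:
        derived.add("caveat-2")
    # Interaction Rule 2: earthquake-zone => caveat-1
    if "earthquake-zone" in derived:
        derived.add("caveat-1")
    out = set()
    # Rule 1: trained-operators and not(caveat-1) => safe-reactor
    if "trained-operators" in derived and "caveat-1" not in derived:
        out.add("safe-reactor")
    # Rule 2: earthquake-zone and not(caveat-2) => not(safe-reactor)
    if "earthquake-zone" in derived and "caveat-2" not in derived:
        out.add("not(safe-reactor)")
    return out

print(conclusions({"trained-operators"}))                     # {'safe-reactor'}
print(conclusions({"trained-operators", "earthquake-zone"}))  # set(): both rules blocked
```

With both facts present, each rule's caveat is derived from the other fact, so nothing at all can be concluded about the reactor's safety; this is exactly why the primed rules with second-order caveats become necessary, and why the construction rapidly becomes unintelligible.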
Combining microtheories and adding caveats to rules does not resolve
all conflicts. In the power plant construction example, joining two
microtheories together with caveats leads to the derivation of: ``Yes,
the utility will get an operating license or no, the utility will not get an
operating license.'' But the utility cannot be told ``yes or no.'' It
must be told either ``yes'' or ``no.'' Adding caveats to
rules makes for a cumbersome negotiating strategy that does not
respond easily to changing circumstances.
Logical deduction can model the reasoning within a given microtheory, but
it cannot settle the dispute. Attempts to combine microtheories into a
larger theory by introducing caveats ultimately lead to ``yes or no''
derivations. So logical deduction is an appropriate and valuable tool
within the microtheories held by the respective participants. Quite often,
new microtheories will need to be created in order to better understand
the issues under negotiation. In general, these new microtheories will
not be logical consequences of the microtheories that were already
familiar to the participants before the negotiation began.
\subsection{Problem Spaces}
Problem spaces \cite{ProblemSpaces} can be used as a process modeling
technique. A problem space is a microtheory that provides for:
\begin{itemize}
\item
an {\bf initial state}. For example, the initial state may be the
financial state of the utility before it begins constructing a new plant.
\item
one or more {\bf operators} that are applicable to each state. For
example, selling bonds is one of the operations that a utility can
usually perform to change its financial condition.
\item
one or more {\bf success states} for one or more of the participants. For
example, the utility can specify its financial goals in terms of revenue,
investment, and earnings.
\end{itemize}
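A problem space of this kind can be sketched as a search over states. The following Python sketch is a hypothetical toy model (the operator names and figures are illustrative, not from the text): an initial state, operators applicable to each state, and a success test.

```python
from collections import deque

def solve(initial, operators, is_success):
    """Breadth-first search over a problem space; returns a sequence of
    operator names reaching a success state, or None if none is found."""
    frontier, seen = deque([(initial, [])]), {initial}
    while frontier:
        state, plan = frontier.popleft()
        if is_success(state):
            return plan
        for name, op in operators:
            nxt = op(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [name]))
    return None

# Toy financial model: state = (cash, generating capacity).
initial = (100, 0)
operators = [
    ("sell-bonds",  lambda s: (s[0] + 50, s[1])),       # raise cash
    ("build-plant", lambda s: (s[0] - 120, s[1] + 1)),  # spend cash, add capacity
]
# Success: keep at least 30 in cash while adding a plant.
print(solve(initial, operators, lambda s: s[0] >= 30 and s[1] >= 1))
# ['sell-bonds', 'build-plant']
```

Note that the initial state, the operators, and the success test are all fixed in advance here; as the discussion below shows, in an actual negotiation each participant brings its own, generally conflicting, version of all three.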
Problem spaces can also be used to model an ongoing negotiation.
Negotiation usually begins with discussion about issues. This process
can be viewed as negotiation about the initial state. Then there is
discussion about how the negotiation should proceed: who will speak, what
the agenda is, and so on---which is analogous to the possible operations
applicable to each state. Also, there is discussion about what
represents a successful outcome. A problem space attempts to model the initial
situation, trajectory, and criteria for when negotiation has ended.
Thus, problem spaces can be a useful way of characterizing an ongoing
process. However, we can rapidly encounter the same difficulty we had
with microtheories: each participant to the negotiation will have its own
problem space of how the negotiation should proceed which will in general
conflict with those of the other participants.
Consider for example the negotiating process to determine whether or
not to give a nuclear plant an operating license. The initial state of the
negotiations is quite problematical. Typically there are thousands of
pages of documentation and claims that have been submitted ahead of
time by the plant owners, the nearby local communities and states, by
environmental groups, and public utility commissions. One of the
participants might say, ``Okay, we are now in this particular initial
state; these are the ways that the negotiation can move forward from
the various states it might get into; and this is what we'll count as an
outcome.'' However, in this case the participants find that they agree
on very little about the starting state of the negotiations. In many
cases the problem spaces are not worked out in such an overt form.
Participants often come to a negotiation with {\em criteria for the
outcome\/} that they initially believe would represent success for
themselves (and possibly for others as well). Also, they would come
with their own understanding of the initial situation and what
negotiation moves would be legitimate. Furthermore in this case the
Nuclear Regulatory Commission decided to change the rules for dealing
with emergency evacuation plans in the middle of the negotiation
process; which neither it nor the other participants initially conceived
as a possible operation. The participants in this case differ greatly on
their characterization of the starting state, the allowable operations, or
which states constitute successful outcomes.
In order to use a problem space for conducting a negotiation, the
participants must first reach an agreement about the initial state of
the negotiation. In general it is difficult to come to an agreement
as to exactly what are the issues at stake. The representatives often
find it difficult to specify how the organization they represent would
characterize an issue. After some representatives have stated their
understanding of an issue, others may decide to change their
characterization. Thus attempting to precisely characterize the
initial state of a negotiation can be a very problematical undertaking
that has no termination and may even deadlock. To make the situation
even more problematical, a prolonged attempt to precisely specify the
initial state has the effect of itself changing the initial state!
Furthermore, some of the representatives may not want to reveal all of
their current plans and understanding concerning the issues under
negotiation, for a variety of reasons---such as being in too preliminary
a state of development to share with others. Specifying in advance
the operators that are applicable at each stage of a negotiation, as
well as the success states, is equally problematical.
Again, having a problem space does not guarantee what's going to happen
because each participant brings its own problem space. Participants with
conflicting commitments about a negotiation will have conflicting
problem spaces. Basically, these are conflicts over commitments on how
the negotiation should proceed. The conflicts between these problem
spaces---much as in the cases of microtheories---need to be dealt with.
Given the prospects for conflict among problem spaces, it would be
difficult (and perhaps not even desirable) to try to create a problem
space in advance that governs the entire, ongoing negotiation.
Instead, problem spaces are probably best used by each participant to
communicate its own analysis of the current negotiating situation,
available options, and desired outcomes.
\section{Conclusions}
The new discipline of Open Systems Science extends classical Artificial
Intelligence in several aspects:
\begin{itemize}
\item
In Open Systems Science, the primary (non-numerical) indicators are
systems {\em commitments}, {\em robustness} (the ability to keep
commitments in the face of conflict), and {\em scalability}
(commitments to increase the scale of systems commitments). Open
Systems Science is grounded in large scale information systems work.
The primary indicator of success in Artificial Intelligence is the
ability to impress humans with behavior that they will call
intelligent. It is grounded in intelligent agents and robots. In
contrast to Artificial Intelligence, work can proceed on the
development of foundations for Open Systems Science without the
need to provide a characterization of ``intelligence''.
\item
In Open Systems Science, {\em representation} is the activity of
communicating with others. Without communication there is no
representation. Communication takes its {\em meaning} from how it affects
the behavior of recipients. In Artificial Intelligence, representation is
traditionally about the correspondence between a structure in an intelligent
agent and a state of affairs in the world.
\item
Open Systems Science views {\em commitment} as a {\em joint course
of action} in which the participants are {\em on course}. The {\em
responsibility} of a participant is its part in the commitment. Artificial
Intelligence has traditionally viewed commitment as a state of mind in
which there is {\em intentionality}
\cite{Cohen-Levesque}\cite{Dennet}.
\end{itemize}
In summary, social processes, especially those of science, technology, and
engineering \cite{Latour}, inform Open Systems Science, whereas Artificial
Intelligence has traditionally turned to neurophysiology, psychology, and
cognitive science.
Open Systems Science provides a framework for analyzing
Artificial Intelligence technologies such as deductive theories,
taxonomies, and dictionaries. Conflict is ubiquitous in Open Systems;
it allows participants to consider and explore their alternatives in a
way that takes other commitments into account. As the participants to
the conflict negotiate their differences, they usually generate
justifications to support their position. They often use microtheories to
bolster their cases. Since microtheories are decontextualized, they can
be carried from place to place and used to seek additional leverage in
many different negotiations. Thus, the use of inference in microtheories
can be seen as a natural kind of specialized activity that often occurs in
the negotiation of conflict. The crucial characteristics of a microtheory
are that the rules are given in advance and that the derivations can be
checked for correctness in real time. The nonmathematical
microtheories of the participants usually conflict. Negotiation of
the conflict that arises can be a source of creativity and robustness.
% \cite{open-systems}
% \cite{robustness-reliability-and-overdetermination}
% \cite{regions-of-the-mind}
\section{Acknowledgments}
First, I would like to express my gratitude and admiration to Wyn Snow
for editorial assistance above and beyond the call of ordinary duty.
Without her help, this paper would contain unboundedly many more
mindtraps.
Second, I wish to acknowledge the aid of Elihu Gerson in repeatedly
pulling me out of intellectual quicksand and setting me back on fruitful
paths.
Third, I wish to acknowledge the help of Les Gasser, David Kirsh, Bruno
Latour, John McCarthy, and Susan Leigh Star for pushing forward in new
directions as well as helping to reconceptualize old ones.
Fourth, I wish to thank members of the Message Passing Semantics
Group for helping to find obscurities and errors.
Fifth (and perhaps most important), I wish to thank Randy Fenstermacher,
Ron Flemming, Sue Gerson, Fanya Montalvo, John Stutz, and other close friends for
helping me to continue to grow.
\section{Related Work}
\vbox{\noindent
Hewitt, Carl,
``Viewing Control Structures as Patterns of Passing Messages,''
{\em A.I.\ Journal}, Vol.~8, No.~3, June 1977, pp.~323--364.}
This paper re-examined the issue of control structures in Artificial
Intelligence. Control structures were previously defined as looking for
the best choice in moving from the current global state to the next one.
The control structure was supposed to accomplish this either by guiding
the production system or by guiding a theorem-prover that was attempting
to search through the realm of possibilities. This paper pointed out that
traditional programming language control structures (such as iteration,
recursion and co-routine) could be analyzed in terms of patterns:
stereotypical or stylized patterns of communication among different
participants. Instead of looking at the behavior of an {\em individual\/}
intelligent agent as Newell and Simon did, this paper initiated the idea
that communities of people are a primary existence proof and analog for
how to extend these ideas.
\bigskip\vbox{\noindent
Kornfeld, William A.\ and Carl Hewitt, ``The Scientific Community
Metaphor,'' {\em IEEE Transactions on Systems, Man, and Cybernetics},
Vol.~SMC-11, No.~1, January 1981, pp.~24--33.}
This paper introduced several important concepts into the Artificial
Intelligence arena, and further develops the ideas Hewitt first discussed
in ``Viewing Control Structures.'' It uses the scientific community as a
model for the problem-solving process, and speaks generally about how
principles and mechanisms of scientific communities might be incorporated
into the problem-solving technology of Artificial Intelligence. Several
fundamental properties of scientific communities have nice analogs for
computing systems that aspire to intelligent behavior. Among these
properties are monotonicity, commutativity, parallelism and pluralism.
The paper also introduces the notion of having sceptics as well as
proponents of different kinds of ideas, and explicates how those kinds of
questions can be investigated concurrently.
\bigskip\vbox{\noindent
Kornfeld, William Arthur,
``Concepts in Parallel Problem Solving,''
Ph.D.\ Thesis, Dept.\ of EECS, MIT, February 1982.}
This is a further development of the work in ``The Scientific Community
Metaphor.'' Kornfeld here shows that by developing a concurrent process
that has critics as well as proponents of ideas, the amount of resources
consumed can, in some cases, be vastly reduced. This results in a kind of
combinatorial implosion instead of the usual combinatorial explosion where
the number of alternatives proliferate indefinitely. Such exponential
proliferation of possibilities is typical of backward-chaining reasoning.
The negotiation described here is very primitive in form, and consists of
entering absolute objections---a very cut-and-dried situation. We would
like to apply this type of process in more relaxed situations where one
has less hard knowledge, and the objections aren't guaranteed to be always
fatal to what they're objecting to.
\bigskip\vbox{\noindent
Barber, Gerald Ram{\'o}n,
``Office Semantics,''
Ph.D.\ thesis, Dept.\ of EECS, MIT, February 1982.}
This paper shows how the viewpoint mechanism introduced in Kornfeld and
Hewitt's ``Scientific Community Metaphor'' can be used to model changing
situations in terms of multiple points of view. It also introduces some
of the kinds of mechanisms for dealing with contradictory microtheories.
\bigskip\vbox{\noindent
Huberman, B.~A., editor,
{\em The Ecology of Computation},
North Holland, 1988.}
This book is an excellent collection of articles which deal with the nature,
design, description, implementation, and management of Open Systems. The
articles are grouped in three major sections. Papers in the first section
deal with general issues underlying Open Systems, studies of computational
ecologies, and their similarities with social organizations. Papers in the
second section deal with implementation issues of distributed computation, and
those in the third section discuss the issues of developing suitable languages
and information media for Open Systems.
\bigskip\vbox{\noindent
Stefik, Mark,
``The Next Knowledge Medium,''
{\em The AI Magazine}, Vol.~7, No.~1, Spring 1986, pp.~34--46.}
Stefik describes the growth and spread of cultural knowledge: the kinds of
things that communities of humans do---and shows how the existence of a
technical infrastructure (such as railroads) can greatly facilitate and
accelerate cultural change. Our current knowledge market is static and
pretty much confined to inscriptions: things that can be reduced to a
string of bits (such as a diagram or sentence or literary work) and thus
transported and copied at very small price. Stefik portrays a dynamic
knowledge market that would supplement our current product market. It
would move intelligent models around that have the capability of taking
action. An active knowledge medium could interact with both its human
users and with various kinds of expert systems. He also describes several
current projects, such as the Co-Lab at Xerox PARC, that are beginning to
show rudimentary characteristics of an active knowledge medium.
\bigskip\vbox{\noindent
Alvarado, Sergio J., Michael G.~Dyer and Margot Flowers,
``Editorial Comprehension in OpEd Through Argument Units,''
{\em Proceedings of AAAI-86}, Fifth National Conference on Artificial
Intelligence, Philadelphia, PA, August 11--15, 1986,
Vol.~1, pp.~250--256.}
This paper shows how arguments can be diagrammed in much the same way that
judges often diagram debate contests. Such diagramming
examines the beliefs, the tree structure of the supporting beliefs, and
the way one side can attack the other side's beliefs. (There are really
two kinds of important relationships between the two sides: support
relationships and attack relationships.) The paper presents an analysis
that looks both at the achievement of plans and goals and at the development
of editorials that critique other sides, showing how other sides hold
beliefs that support the opinion being reported. This is
quite interesting work toward building technology that can
do argument analysis, because argument analysis is an important component of
negotiation. Of course, as John McCarthy pointed out, there are other kinds
of representations involved in negotiation, such as making threats and other
kinds of speech acts, but argument analysis is certainly a very important
component.
\bigskip\vbox{\noindent
Devereux, Erik August,
``Processing Political Debate: A Methodology for Data Production
with Special Application to the Lincoln--Douglas Debates,''
B.S.\ thesis, MIT Dept.\ of Political Science, June 1985.}
Devereux develops something very similar to the argument units of
Alvarado {\em et al.} He takes the whole of the Lincoln--Douglas
debates and attempts to identify both attacking statements between Douglas
and Lincoln and supporting links within the individual arguments
themselves. Interestingly enough, there are no supporting links between
the two debaters, so in that respect the argument units of Alvarado {\em
et al.}\ represent an advance over the analysis done by Devereux.
\begin{thebibliography}{9999}
\bibitem[Agha 1986]{agha-phd}
Agha, G., {\em Actors: A Model of Concurrent Computation in Distributed
Systems}, Cambridge, MA: MIT Press, 1986.
\bibitem[Becker 1960]{concept-of-commitment}
Becker, Howard S., ``Notes on the Concept of Commitment,'' {\em American
Journal of Sociology}, Vol.~66, July 1960, pp.~32--40.
\bibitem[Clinger 1981]{clinger-phd}
Clinger, W.~D., {\em Foundations of Actor Semantics}, AI-TR-633, MIT
Artificial Intelligence Laboratory, May 1981.\filbreak
\bibitem[Fikes 1982]{commitment-based-framework}
Fikes, Richard E., ``A commitment-based framework for describing informal
cooperative work,'' {\em Cognitive Science}, Vol.~6, 1982, pp.~331--347.
\bibitem[Gerson 1976]{quality-of-life}
Gerson, Elihu M., ``On the Quality of Life,'' {\em American Sociological
Review}, Vol.~41, October 1976, pp.~793--806.
\bibitem[Hewitt and Atkinson 1979]{serializers}
Hewitt, Carl and Atkinson, Russell,
``Specification and Proof Techniques for Serializers,''
{\em IEEE Transactions on Software Engineering},
Vol.~SE-5, No.~1, January 1979, pp.~10--23.
\bibitem[Hewitt and Baker 1977]{laws}
Hewitt, C.\ and Baker, H., ``Laws for Communicating Parallel Processes,''
{\em 1977 IFIP Congress Proceedings}, IFIP, August 1977, pp.~987--992.\filbreak
\bibitem[Latour 1987]{science-in-action}
Latour, Bruno, {\em Science In Action}, Cambridge, MA: Harvard University
Press, 1987.
\bibitem[Shapiro 1987]{shapiro-cp}
Shapiro, E., ``A Subset of Concurrent Prolog and Its Interpreter,'' in
{\em Concurrent Prolog: Collected Papers}, Cambridge, MA: MIT Press,
1987, pp.~27--83.\filbreak
\bibitem[Smith and Davis 1981]{contract-nets}
Smith, R. and Davis, R., ``Frameworks for cooperation in distributed
problem solving,'' {\em IEEE Transactions on Systems, Man, and
Cybernetics}, Vol.~SMC-11, 1981, pp.~61--70.
\bibitem[Star 1983]{simplification-in-scientific-work}
Star, S.L., ``Simplification in scientific work: An example from
neuroscience research,'' {\em Social Studies of Science}, Vol.~13, No.~2,
1983, pp.~205--228.
\bibitem[Strauss 1978]{negotiations}
Strauss, Anselm, {\em Negotiations}, Jossey-Bass, 1978.
\bibitem[Theriault 1983]{theriault-masters}
Theriault, D., {\em Issues in the Design and Implementation of Act2},
Technical Report~728, MIT Artificial Intelligence Laboratory, June 1983.\filbreak
\bibitem[Winograd and Flores 1987]{winograd-flores}
Winograd, Terry, and Flores, Fernando, {\em Understanding Computers and
Cognition}, Reading, MA: Addison-Wesley, 1987.
\end{thebibliography}
\end{document}